Compromise Strategies for Constraint Agents
Abstract
We describe several joint problem solving strategies for a team of multi-interest constraint-based reasoning agents. Agents yield in one area of conflict in order to gain concessions in another. For example, in a distributed meeting scheduling problem, individuals with different preferences may be willing to give up the choice of the location for the meeting if allowed to choose the time of the meeting. Compromise benefits the overall solution quality by allowing each agent an opportunity to participate in the solution. We demonstrate the utility of this approach using a collection of agents collaborating on random graph coloring problems. We propose several simple metrics for evaluating solutions from the perspective of individual agents, and additional metrics for evaluating the solutions as compromises. Finally, we experimentally evaluate the performance of the strategies with respect to the metrics.

Introduction

Individuals may bring different priorities to a problem. How can they best work together to produce a compromise solution? This is, of course, an old issue in human affairs. Nowadays we envision computer agents grappling with this issue, perhaps representing our preferences. We address this issue here in the context of constraint satisfaction problems (CSPs), which are widely encountered in artificial intelligence. We propose several simple strategies for joint problem solving that lead to compromise solutions. We propose several simple metrics for evaluating solutions from the perspective of individual agents, and additional metrics for evaluating the solutions as compromises. Finally, we experimentally evaluate the performance of the strategies with respect to the metrics. CSPs are composed of variables, values, and constraints. A solution assigns a value to each variable such that all the constraints, which specify acceptable value combinations, are satisfied.
We provide a simple, initial model of preference in which each agent has a ranking of the potential values for variables. We employ coloring problems here as a testbed. Coloring problems require assigning colors to variables, where specified pairs of variables cannot have the same color. They model basic scheduling and resource allocation problems. Here we assume that each agent ranks the colors in order of preference. We use this domain to begin an investigation of questions like: What is a good compromise strategy if maximizing the sum of the values assigned to the solution by the participants is less important than minimizing the disparity among those values? (This is a situation familiar to any parent who has had to provide treats for several siblings.) We describe the compromise strategies, the individual metrics, and the compromise metrics in the following section. Next, we present our experiments, describing the experimental design and then graphing the behavior of the different strategies with respect to the different metrics. In the last section we briefly relate our work to previous work and suggest directions for further research.

Compromise Strategies and Metrics

In this section we describe four joint problem solving strategies and evaluation metrics for constraint agents. The strategies we propose emphasize the emergence of a compromise solution among a team of agents. They are useful when agents have competing goals but perceive an advantage to working together in a fair way, and may be useful in application areas where user preferences play an important role, such as scheduling, intelligent user interfaces, and telecommunication.

Strategies

The joint problem solving strategies have the following characteristics:
- All group members participate in generating a problem solution.
- Agents are not guaranteed an individually optimal solution.
- The strategy doesn't unfairly favor any agent.
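As a concrete illustration of the coloring testbed, the sketch below shows one possible encoding of variables, inequality constraints, and per-agent color rankings. The data structures and names here are our own assumptions for illustration, not taken from the paper.

```python
# A minimal sketch (our own encoding, not the authors') of a coloring CSP
# with per-agent color preferences.

COLORS = ["red", "green", "blue"]
VARIABLES = ["A", "B", "C", "D"]

# Constrained pairs: these variables must receive different colors.
EDGES = [("A", "B"), ("B", "C"), ("A", "C")]

def utilities(ranking):
    """Turn an agent's color ranking into preference utilities:
    the most-preferred color gets the highest utility."""
    n = len(ranking)
    return {color: n - i for i, color in enumerate(ranking)}

agent_prefs = {
    "agent1": utilities(["red", "green", "blue"]),
    "agent2": utilities(["blue", "red", "green"]),
}

def consistent(assignment):
    """True if no constrained pair of assigned variables shares a color."""
    return all(
        assignment.get(u) is None
        or assignment.get(v) is None
        or assignment[u] != assignment[v]
        for u, v in EDGES
    )

solution = {"A": "red", "B": "green", "C": "blue", "D": "red"}
print(consistent(solution))  # True: every edge joins different colors
```

Any of the strategies below can operate over this shared representation while each agent keeps its own `utilities` table.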
- There is no central controller monitoring the problem solving process.
- Agents consider only their own preferences when selecting a value to assign to a variable.

(From: AAAI Technical Report WS-97-05. Compilation copyright © 1997, AAAI, www.aaai.org. All rights reserved.)

Turn-taking strategy

Turn-taking is a simple strategy where agents take turns assigning values to variables. During problem solving, the agent whose turn it is sends a message to the team containing the variable-value assignment. Agents perform backtrack search for the solution together. Each agent knows when a conflict occurs, and all agents know which agent is responsible for the conflicting variable assignment. At that point, the agent that originally assigned the variable a value chooses again. When a solution is found, the agents report their individual metrics. Of course, turn-taking doesn't guarantee an optimal solution for individual agents, but it does guarantee each an opportunity to make some variable assignments.

Average preference strategy

The average preference strategy is based on an a priori computation of the average preference utility for the values of each variable. The value selected for assignment is the one with the highest average preference utility. When a solution is found, agents compute their metrics based upon their original sets of preference utilities.

Concession strategy

The concession strategy is a variation of turn-taking: an agent making a variable assignment will concede its turn to another team member if the other agent has a higher preference utility for a variable-value assignment.

Lowest score strategy

During search, agents track their scores for the current labeling of the problem. When a variable is to be assigned a value, the agent with the lowest score chooses a value based upon its own preferences.

The average preference strategy may be appropriate in domains where agents are willing to exchange all their preferences.
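The value-selection step of the average preference strategy can be sketched as follows. This is our own illustrative code under assumed data structures (utility tables per agent, an edge list of constraints), not the authors' implementation; backtracking on a dead end is only noted, not implemented.

```python
# Sketch of value selection in the average preference strategy: among the
# colors consistent with the current partial assignment, pick the one whose
# mean utility across all agents is highest.

def average_preference_choice(variable, colors, agent_prefs, assignment, edges):
    def consistent(color):
        # The color must differ from every already-assigned neighbor.
        return all(
            assignment.get(other) != color
            for u, v in edges
            for other in ((v,) if u == variable else (u,) if v == variable else ())
        )

    candidates = [c for c in colors if consistent(c)]
    if not candidates:
        return None  # dead end: backtracking would be triggered here
    return max(
        candidates,
        key=lambda c: sum(prefs[c] for prefs in agent_prefs.values())
                      / len(agent_prefs),
    )

agent_prefs = {
    "agent1": {"red": 3, "green": 2, "blue": 1},
    "agent2": {"red": 1, "green": 3, "blue": 2},
}
edges = [("A", "B")]
# With A already red, B's candidates are {green, blue}; green has the
# higher average utility (2.5 vs. 1.5).
print(average_preference_choice("B", ["red", "green", "blue"],
                                agent_prefs, {"A": "red"}, edges))  # green
```

Turn-taking and the lowest score strategy differ only in *which* agent's utility table drives the `max` step, so the same skeleton covers them.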
For example, agents in a course scheduling application represent the interests of teachers, students, and administrators; the scheduling preferences of each agent can be stated and shared prior to problem solving. An important goal in this domain is to achieve high solution quality for the group while minimizing disparities among the agents. A strategy that supports information hiding, such as turn-taking, may be important in situations where the agents are willing or required to work together to solve problems but, for privacy reasons, are only willing to share partial information during problem solving. For example, agents from different telecommunication companies may be willing to cooperate to solve a routing problem but would like to do so by maximizing their own preferences while cooperating with others. Compromise strategies may also be important to intelligent user interface agents, such as (Maes 1994), (Sycara & Zeng 1996), and (Andreoli et al. 1995), where the primary purpose of the agent is representing the preferences and priorities of humans.

Metrics

We propose simple metrics to compare how well individual agents fare, how well the group performs when using each compromise strategy, and the efficiency of each strategy. The selection of an appropriate compromise strategy depends upon the type of group interaction desired and the type of measurements used. The sum and product metrics are similar to the metrics proposed by (Rosenschein & Zlotkin 1994) for evaluating 2-agent negotiation protocols. We compare the efficiency of the compromise strategies by counting the number of constraint checks generated during problem solving.

Individual metrics

The product and sum measures indicate an agent's preference for a particular problem solution. The sum metric is the sum of the preference utilities for each variable-value assignment in the solution.
The product metric is the sum of log(preference utility of value) over the variable-value assignments in the solution (equivalently, the log of the product of the preference utilities).

Group Metrics

The compromise strategies are compared using the following metrics:
- median of the individual metrics over the agents
- maximum solution quality of the group
- minimum solution quality of the group
- difference between the maximum and minimum solution qualities (Max - Min)
- number of constraint checks

The solution quality for the above metrics can be computed using either the product or the sum measure. The minimum sum and minimum product over the agents indicate the lowest solution quality obtained by any agent on the team. The maximum and median metrics are used to determine which strategy returns the highest solution qualities for the team of agents. Teams of agents may also be concerned with the disparity among the solution qualities of the agents; (Max - Min) is a good indicator of disparity. A strategy where team members have similar solution qualities will have a low (Max - Min) score. The average number of constraint checks is an indicator of the problem solving efficiency of a particular strategy.

Experiments

We evaluate the utility of the proposed compromise strategies and metrics using solvable random coloring problems. Coloring problems are representative of scheduling and resource allocation problems. In these problems, colors must be assigned to variables so that related variables do not have the same color.
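The individual and group metrics above are straightforward to compute; the sketch below (our own helper code, with assumed solution and preference structures) shows the sum metric, the product metric as a sum of logs, and the group-level median, max, min, and (Max - Min) disparity.

```python
# Sketch of the individual metrics (sum, product-as-log-sum) and the
# group metrics derived from each agent's score on a shared solution.

import math
from statistics import median

def sum_metric(solution, prefs):
    """Sum of preference utilities over the variable-value assignments."""
    return sum(prefs[color] for color in solution.values())

def product_metric(solution, prefs):
    """Sum of log utilities, i.e. the log of the product of utilities."""
    return sum(math.log(prefs[color]) for color in solution.values())

def group_metrics(solution, agent_prefs, metric):
    scores = [metric(solution, prefs) for prefs in agent_prefs.values()]
    return {
        "median": median(scores),
        "max": max(scores),
        "min": min(scores),
        "max_minus_min": max(scores) - min(scores),  # disparity indicator
    }

agent_prefs = {
    "agent1": {"red": 3, "green": 2, "blue": 1},
    "agent2": {"red": 1, "green": 3, "blue": 2},
}
solution = {"A": "red", "B": "green"}
print(group_metrics(solution, agent_prefs, sum_metric))
# agent1 scores 3 + 2 = 5, agent2 scores 1 + 3 = 4, so Max - Min = 1
```

A low `max_minus_min` signals the kind of low-disparity compromise the paper is after, even when the maximum individual score is modest.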